Yesterday we walked through the nginx log format and how to set up Filebeat running on the local machine, with the backend app and nginx split into two containers wired together with Compose.
We also covered how different kinds of logs should be stored. Today we'll look at how to ship logs upstream directly at the container level, using three containers: nginx, flask, and filebeat, with no Filebeat running on the host.
First, recall from yesterday that nginx's error log and the error log uWSGI actually emits can differ. These two now live in separate containers, so if each side writes its own logs, how do we let each container see the other's?
More importantly: if the filebeat container can't reach either set of logs, how can it ship them to Elasticsearch?
The answer is volumes, which we've touched on briefly before. Recall that we previously used a volume to let the host mount data from inside a container. Containers can share data with each other the same way, and the setup is just as simple: mount the same volume in every container, and each one can then read the others' data.
Once Filebeat can reach both, it can upload logs from different containers, whether nginx logs, system logs, or even the system state of both containers.
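As a minimal sketch of that idea (the service names, images, and the shared-logs volume name below are illustrative placeholders, not this project's final setup), two services sharing one named volume look like this:
docker-compose.yml (sketch)
version: '3.3'
services:
  producer:
    image: nginx
    volumes:
      - shared-logs:/var/log/nginx   # nginx writes its logs into the volume
  consumer:
    image: docker.elastic.co/beats/filebeat:7.9.2
    volumes:
      - shared-logs:/var/log/nginx   # Filebeat reads the very same files
volumes:
  shared-logs: {}                    # the named volume both services mount
Because both containers mount the same volume at the same path, anything the producer writes is immediately visible to the consumer.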
So first, we prepare Filebeat's Dockerfile.
Dockerfile
FROM docker.elastic.co/beats/filebeat:7.9.2
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
# Filebeat rejects a config file that other users can modify, so hand
# ownership to root before dropping back to the unprivileged filebeat user.
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
USER filebeat
# Enable the nginx module, then overwrite its defaults with our own settings.
RUN filebeat modules enable nginx
COPY nginx.yml /usr/share/filebeat/modules.d/nginx.yml
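Before wiring this into Compose, you can sanity-check the image on its own; the tag my-filebeat below is just an example. The beats images pass their arguments through to the filebeat binary, so modules list becomes filebeat modules list:
# Build the Filebeat image from the directory containing the Dockerfile.
docker build -t my-filebeat ./filebeat
# List modules; nginx should appear under "Enabled:".
docker run --rm my-filebeat modules list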
We need two config files here: filebeat.yml and nginx.yml.
As the names suggest, filebeat.yml holds Filebeat's general settings: for the cloud you can set your Elastic Cloud id and credentials to connect to Elastic Cloud, while for a local stack you can instead configure the Elasticsearch and Kibana hosts and ports to connect to. nginx.yml is the nginx module config shared with you before; essentially all we change there are the paths, pointing them at the paths inside the container.
filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
  ### Multiline options
  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
cloud.id: "<your cloud.id>"
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
cloud.auth: "<your password>"
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false
    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""
    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200
    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:
    # Secret token for the APM Server(s).
    #secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
The parts to pay attention to here are cloud.id and cloud.auth: fill in your cloud deployment's id and credentials respectively.
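As the comments in the file note, cloud.id overrides output.elasticsearch.hosts, so with the two cloud.* lines set, the hosts: ["localhost:9200"] entry is ignored. If you run your own stack instead of Elastic Cloud, a minimal sketch of the alternative looks like this (the elasticsearch and kibana host names are assumptions for containers on the same Docker network):
filebeat.yml (self-hosted variant, sketch)
#cloud.id:
#cloud.auth:
output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # your own Elasticsearch endpoint
setup.kibana:
  host: "kibana:5601"             # needed if you load the nginx dashboards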
nginx.yml
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.9/filebeat-module-nginx.html
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log*"]
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/error.log*"]
  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
The main change here is var.paths, which we point at /var/log/nginx inside the shared volume. (Note that the official nginx image symlinks access.log and error.log to stdout/stderr; mounting a volume over /var/log/nginx hides those symlinks, so nginx creates regular files there that Filebeat can read.)
Next, add the new service to Compose.
docker-compose.yml
version: '3.3'
services:
  flask:
    build: ./flask
    container_name: template_flask
    # restart: always
    environment:
      - APP_NAME=FlaskApp
    expose:
      - 8080
  nginx:
    build: ./nginx
    volumes:
      - "/c/Users/user/flask-nginx-elk-demo/nginx-logs:/var/log/nginx"
    container_name: template_nginx
    # restart: always
    ports:
      - "80:80"
    depends_on:
      - flask
  filebeat:
    build: ./filebeat
    volumes:
      - "/c/Users/user/flask-nginx-elk-demo/nginx-logs:/var/log/nginx"
    container_name: template_filebeat
    # restart: always
    depends_on:
      - nginx
Everything before it is the same as on previous days; the only addition is the filebeat service, whose volume is configured to share the same mount as nginx.
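One caveat: the bind mount above hard-codes an absolute host path (here a Windows-style /c/Users/... path), which ties the compose file to one machine. A more portable sketch of the same sharing uses a Docker named volume instead (trimmed to the essentials, so container_name and environment are omitted):
docker-compose.yml (named-volume variant, sketch)
version: '3.3'
services:
  flask:
    build: ./flask
    expose:
      - 8080
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - nginx-logs:/var/log/nginx   # named volume instead of a host path
    depends_on:
      - flask
  filebeat:
    build: ./filebeat
    volumes:
      - nginx-logs:/var/log/nginx   # same volume, same data
    depends_on:
      - nginx
volumes:
  nginx-logs: {}
The trade-off is that the logs no longer appear directly in a host directory; inspect them with docker volume inspect or from inside the containers.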
With that, after docker-compose up --build, your services' logs are collected automatically, without running Filebeat on the host!
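A quick end-to-end check, assuming the original compose file above (the container name comes from its container_name field):
# Start everything in the background.
docker-compose up --build -d
# Hit nginx once so an access-log line is written into the shared volume.
curl http://localhost/
# Follow Filebeat's own output; you should see it harvest the nginx logs.
docker logs -f template_filebeat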
Tomorrow we'll dig deeper and walk through a few variations.